
Technology: legal gaps expose UK election to disinformation threat

Ruth Green
Monday 17 June 2024

Ahead of the UK general election in July, there are concerns that the country’s light-touch approach to regulating artificial intelligence (AI) has left the electorate vulnerable to disinformation and risks further undermining public trust in democracy.

‘The UK has been very vocal about its position to take a very light touch to regulating tech companies as well as AI technologies’, says Kelsey Farish, a media and entertainment lawyer and generative AI expert based in London. She says much of the UK’s hesitancy to regulate AI emerged after the country left the EU and adopted a more flexible, ‘pro-innovation’ approach. What that means, in practice, ‘is that the politicians of the day say they’re going to leave it to the market to decide what to do.’

As well as exacerbating the asymmetry of power between Big Tech and individual users, Farish believes this approach has put the UK on the back foot compared with the EU, where the landmark AI Act was recently passed. The result is a set of concerning regulatory gaps. ‘We have these regulatory gaps that are only being filled by Ofcom [the communications regulator], the Electoral Commission, industry bodies and so on, who are coming up with their own codes of practice and regulatory standards to shape behaviour’, she says.

A chance to bridge these gaps was dashed when the Artificial Intelligence Regulation Bill failed to survive the government’s ‘wash-up’ process before Parliament was abruptly dissolved on 30 May, ahead of polling day on 4 July. Julian Hamblin, Senior Vice-Chair of the IBA Technology Committee and a partner at UK firm Trethowans, says the legislation could have been a game-changer for the sector. ‘It was going to introduce a sort of AI authority’, he says. ‘That authority would have been tasked with looking both at how the regulators were aligning themselves or working together on all of this, and seeing whether the legislation that’s in place was actually fit for purpose for dealing with the needs.’


Most of the actors in the AI field are large, multinational corporations and whatever the effects of the EU AI Act on how they operate, it will impact on how they want to operate in the UK

Julian Hamblin
Senior Vice-Chair, IBA Technology Committee

In lieu of such laws, there’s growing pressure on regulators to step in. A recent report from the Alan Turing Institute called on the media and electoral regulators to issue joint guidance on the fair use of AI by political parties in election campaigning.

Sam Stockwell, a research associate at the Centre for Emerging Technology and Security and the report’s lead author, says the UK needs some kind of accountability mechanism to help avoid malicious content generation. ‘Of course, we can’t introduce any emergency legislation or regulation before the election’, he says, ‘but at least we could have some kind of ground rules that political parties then sign up to voluntarily […] and frame it around protecting election integrity.’

Stockwell points to the Irish Electoral Commission, which published a voluntary framework ahead of recent local elections aimed at helping online platforms and search engines tackle misinformation. He says it’s not ‘unfeasible’ to think its UK counterpart might introduce similar guidance before polling day.

‘We recognise the challenges posed by AI but regulating the content of campaign material would require a new legal framework,’ says a spokesperson for the UK Electoral Commission. ‘We will be monitoring the impact at the general election and stand ready to be part of conversations about its impact on voter trust and confidence going forward.’

An Ofcom spokesperson told Global Insight it would ‘consider carefully’ the Alan Turing Institute report’s recommendations, but stressed that the conduct of political parties – including their use of AI during a general election campaign – fell outside its remit.

A recent BBC investigation found that young UK voters are being recommended AI-generated videos containing misinformation, faked footage and abusive comments, fuelling fears such content could sway votes in the upcoming election.

However, there’s little recent evidence to suggest AI-generated fake content has swayed the outcome of elections in Europe or further afield. ‘When you actually look at polling data, for instance, in places like Argentina, Poland or Slovakia, and when you map when those viral deepfakes have occurred, you actually don’t see any major drop in or changing support by voters for those candidates’, says Stockwell. ‘That’s not to say that we might not see that moving forward, of course, but, at least so far, the evidence hasn’t shown that.’

But as lawmakers in US states such as California rush to propose hundreds of AI bills, Farish stresses interest in this area is far from waning. ‘The concerns are not new’, she says. ‘What’s different now is the scale at which this type of fake, misleading or harmful content is spread.’ This is the real issue, she believes, and the reason why it’s attracting so much legislative and academic attention, as well as interest from legal practitioners.

In the UK, Hamblin says the next government will probably modify its position and push for greater regulation in this space. ‘Most of the actors in the AI field are large, multinational corporations and whatever the effects of the EU AI Act on how they operate, it will impact on how they want to operate in the UK’, he says. ‘We want to encourage AI innovation and make the UK a global centre for it, but we’ve got to make sure that there are suitable public safeguards in terms of how it’s actually used and delivered in our country too.’

Image: Iryna/AdobeStock.com
